
    Recognising facial expressions in video sequences

    We introduce a system that processes a sequence of images of a front-facing human face and recognises a set of facial expressions. We use an efficient appearance-based face tracker to locate the face in the image sequence and estimate the deformation of its non-rigid components. The tracker works in real time, is robust to strong illumination changes, and factors out changes in appearance caused by illumination from changes due to face deformation. We adopt a model-based approach to facial expression recognition. In our model, an image of a face is represented by a point in a deformation space. The variability of the classes of images associated with each facial expression is represented by a set of samples that model a low-dimensional manifold in the space of deformations. We introduce a probabilistic procedure based on a nearest-neighbour approach that combines the information provided by the incoming image sequence with the prior information stored in the expression manifold to compute a posterior probability for each facial expression. Our experiments show that the system works in an unconstrained environment with strong changes in illumination and face location, achieving an 89% recognition rate on a set of 333 sequences from the Cohn-Kanade database.
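    The nearest-neighbour posterior described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the Gaussian kernel on nearest-neighbour distance, and the `sigma` bandwidth are our assumptions.

```python
import numpy as np

def expression_posterior(deformation, manifold, sigma=1.0):
    # `manifold` maps each expression label to an (n_samples, d) array of
    # deformation-space sample points (hypothetical data layout).
    labels = sorted(manifold)
    # squared distance to the nearest stored sample of each class
    d2 = np.array([np.min(np.sum((manifold[c] - deformation) ** 2, axis=1))
                   for c in labels])
    # Gaussian likelihood of the nearest neighbour, normalised to a posterior
    lik = np.exp(-d2 / (2 * sigma ** 2))
    return dict(zip(labels, lik / lik.sum()))
```

    A query deformation close to the "happy" samples then receives most of the posterior mass, and probabilities over all expressions sum to one.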

    Conditional Infomax Learning: An Integrated Framework for Feature Extraction and Fusion

    The paper introduces a new framework for feature learning in classification motivated by information theory. We first systematically study the information structure and present a novel perspective revealing the two key factors in information utilization: class-relevance and redundancy. We derive a new information decomposition model in which a novel concept called class-relevant redundancy is introduced. Subsequently a new algorithm called Conditional Informative Feature Extraction is formulated, which maximizes the joint class-relevant information by explicitly reducing the class-relevant redundancies among features. To address the computational difficulties in information-based optimization, we incorporate Parzen window estimation into the discrete approximation of the objective function and propose a Local Active Region method which substantially increases the optimization efficiency. To effectively utilize the extracted feature set, we propose a Bayesian MAP formulation for feature fusion, which unifies a Laplacian Sparse Prior and Multivariate Logistic Regression to learn a fusion rule with good generalization capability. Recognizing the inefficiency caused by treating the extraction stage and the fusion stage separately, we further develop an improved design of the framework that coordinates the two stages by introducing feedback from the fusion stage to the extraction stage, which significantly enhances the learning efficiency. The results of the comparative experiments show remarkable improvements achieved by our framework.
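    The relevance-versus-redundancy trade-off at the heart of this abstract can be illustrated with a toy greedy selector over discrete features. This is a simplification of the paper's criterion, not its algorithm: we score each candidate by mutual information with the class minus its largest mutual information with already-selected features, and all names here are ours.

```python
import numpy as np

def mutual_info(x, y):
    # discrete mutual information from joint frequencies (natural log)
    xs, ys = np.unique(x), np.unique(y)
    pxy = np.array([[np.mean((x == a) & (y == b)) for b in ys] for a in xs])
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def greedy_select(F, C, k):
    # F: (n, m) discrete feature matrix; C: (n,) class labels.
    # Greedily pick k features maximising class-relevance minus redundancy
    # to the features already selected (an illustrative criterion only).
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for j in range(F.shape[1]):
            if j in selected:
                continue
            rel = mutual_info(F[:, j], C)
            red = max((mutual_info(F[:, j], F[:, s]) for s in selected),
                      default=0.0)
            if rel - red > best_score:
                best, best_score = j, rel - red
        selected.append(best)
    return selected
```

    For a balanced binary label, a feature identical to the class carries the full bit of relevant information, so it is selected first; a feature independent of the class scores zero relevance.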

    A fast ℓ1-solver and its applications to robust face recognition

    In this paper we apply a recently proposed Lagrange Dual Method (LDM) to design a new Sparse Representation-based Classification (LDM-SRC) algorithm for the robust face recognition problem. The proposed approach improves the efficiency of the SRC algorithm significantly and has the following advantages: (1) it employs the LDM ℓ1-solver to find the solution of the ℓ1-norm minimization problem, which is much faster than other state-of-the-art ℓ1-solvers, e.g. ℓ1-magic and ℓ1-ℓs. (2) The LDM ℓ1-solver utilizes a new Lagrange-dual reformulation of the original ℓ1-norm minimization problem, which not only reduces the problem size when the dimension of the training image data is much less than the number of training samples, but also makes the dual problem smooth and convex, thereby converting the non-smooth ℓ1-norm minimization problem into a sequence of smooth optimization problems. (3) The LDM-SRC algorithm maintains good recognition accuracy while reducing the computational time dramatically. Experimental results are presented on several benchmark face databases.
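    The SRC pipeline this abstract builds on can be sketched as follows. We substitute a simple iterative soft-thresholding (ISTA) ℓ1-solver in place of the paper's LDM solver, purely for illustration; the paper's point is precisely that LDM is much faster than generic solvers of this kind. Function names and parameters are our assumptions.

```python
import numpy as np

def ista(A, y, lam=0.1, iters=500):
    # Stand-in l1-solver: minimise 0.5*||Ax - y||^2 + lam*||x||_1 by
    # iterative soft-thresholding (NOT the paper's Lagrange Dual Method).
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step on the smooth term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

def src_classify(A, labels, y, lam=0.05):
    # Sparse Representation-based Classification: code the test image y over
    # the training dictionary A (one column per training image), then assign
    # the class whose coefficients alone reconstruct y with smallest residual.
    x = ista(A, y, lam)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        residuals[c] = np.linalg.norm(y - A @ np.where(mask, x, 0.0))
    return min(residuals, key=residuals.get)
```

    The classification rule is solver-agnostic: swapping in a faster ℓ1-solver, as the paper does, changes only the `ista` step.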

    Learning object representations from lighting variations

    Realistic representation of objects requires models which can synthesize the image of an object under all possible viewing conditions. We propose to learn these models from examples. Methods for learning surface geometry and albedo from one or more images under fixed pose and varying lighting conditions are described. Singular value decomposition (SVD) is used to determine shape, albedo, and lighting conditions up to an unknown 3×3 matrix, which is sufficient for recognition. The use of class-specific knowledge and the integrability constraint to determine this matrix is explored. We show that when the integrability constraint is applied to objects with varying albedo, it leads to an ambiguity in depth estimation similar to the bas-relief ambiguity. The integrability constraint, however, is useful for resolving ambiguities which arise in current photometric theories. Object Recognition Workshop, ECCV, 1996.
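    The SVD step described above can be sketched as a rank-3 factorisation. Stacking the images under varying lighting as rows of a matrix M, we have M ≈ L·B, where L holds per-image lighting and B holds per-pixel scaled surface normals; since M = (L·A)(A⁻¹·B) for any invertible 3×3 matrix A, the factors are recovered only up to that ambiguity, exactly as the abstract states. The split of the singular values between the factors and the names below are our choices.

```python
import numpy as np

def factor_lighting(M):
    # M: (n_images, n_pixels) stack of images of one object under n_images
    # lighting conditions. Return a rank-3 factorisation M ~ L @ B.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = U[:, :3] * np.sqrt(s[:3])             # lighting, up to a 3x3 matrix A
    B = np.sqrt(s[:3])[:, None] * Vt[:3]      # shape/albedo, up to A^{-1}
    return L, B
```

    For noise-free images generated by a true rank-3 model, the product L @ B reproduces M exactly; with real images the truncated SVD gives the best rank-3 approximation in the least-squares sense.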

    Face Tracking and Recognition from Stereo Sequence


    Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection

    We develop a face recognition algorithm which is insensitive to gross variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3-D linear subspace of the high-dimensional image space, provided the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's Linear Discriminant and produces well-separated classes in a low-dimensional subspace even under severe variation in lighting and facial expressions.
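    The Fisher projection this abstract describes can be sketched directly from the scatter matrices. This is a bare illustration of Fisher's Linear Discriminant, not the full Fisherfaces pipeline: the actual method first applies PCA so that the within-class scatter is nonsingular, a step omitted here (we use a pseudoinverse instead), and the function name is ours.

```python
import numpy as np

def fisher_projection(X, y, k):
    # X: (n, d) images as rows; y: (n,) class labels. Return the k projection
    # directions maximising between-class over within-class scatter.
    mean = X.mean(0)
    Sw = np.zeros((X.shape[1], X.shape[1]))   # within-class scatter
    Sb = np.zeros_like(Sw)                    # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    # directions are leading eigenvectors of pinv(Sw) @ Sb
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)[:k]
    return evecs[:, order].real               # project with X @ W
```

    Projecting two well-separated classes onto the leading Fisher direction keeps them disjoint in the low-dimensional space, which is the property the recognition algorithm relies on.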